

Conformalized Generative Bayesian Imaging: An Uncertainty Quantification Framework for Computational Imaging

Ekmekci, Canberk, Cetin, Mujdat

arXiv.org Artificial Intelligence

Uncertainty quantification plays an important role in achieving trustworthy and reliable learning-based computational imaging. Recent advances in generative modeling and Bayesian neural networks have enabled the development of uncertainty-aware image reconstruction methods. Current generative model-based methods seek to quantify the inherent (aleatoric) uncertainty in the underlying image for given measurements by learning to sample from the posterior distribution of the underlying image. Bayesian neural network-based approaches, on the other hand, aim to quantify the model (epistemic) uncertainty in the parameters of a deep neural network-based reconstruction method by approximating the posterior distribution of those parameters. However, there remains a need for an inversion method that can jointly quantify complex aleatoric and epistemic uncertainty patterns. In this paper, we present a scalable framework that can quantify both aleatoric and epistemic uncertainties. The proposed framework accepts an existing generative model-based posterior sampling method as an input and introduces an epistemic uncertainty quantification capability through Bayesian neural networks with latent variables and deep ensembling. Furthermore, by leveraging the conformal prediction methodology, the proposed framework can be easily calibrated to ensure rigorous uncertainty quantification. We evaluated the proposed framework on magnetic resonance imaging, computed tomography, and image inpainting problems and showed that the epistemic and aleatoric uncertainty estimates it produces display the characteristic features of true epistemic and aleatoric uncertainties. Furthermore, our results demonstrate that applying conformal prediction on top of the proposed framework enables marginal coverage guarantees consistent with frequentist principles.
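The conformal calibration step the abstract refers to can be illustrated with a generic split-conformal sketch. The toy regression task, variable names, and noise model below are illustrative assumptions, not the paper's imaging setup; only the calibration recipe itself (held-out nonconformity scores plus a finite-sample-corrected quantile) is the standard conformal prediction procedure:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in predictor; in the paper's setting the scores would instead come
# from a generative posterior-sampling reconstruction method.
def predict(x):
    return np.sin(x)

n_cal, alpha = 500, 0.1                      # calibration size, miscoverage level
x_cal = rng.uniform(0, 6, n_cal)
y_cal = np.sin(x_cal) + 0.3 * rng.standard_normal(n_cal)

# Split conformal: nonconformity score = absolute residual on held-out data.
scores = np.abs(y_cal - predict(x_cal))

# Finite-sample corrected quantile gives marginal coverage >= 1 - alpha.
q_level = np.ceil((n_cal + 1) * (1 - alpha)) / n_cal
q_hat = np.quantile(scores, min(q_level, 1.0), method="higher")

# Prediction interval for a new point: [f(x) - q_hat, f(x) + q_hat].
x_test = rng.uniform(0, 6, 2000)
y_test = np.sin(x_test) + 0.3 * rng.standard_normal(2000)
covered = np.abs(y_test - predict(x_test)) <= q_hat
print(f"empirical coverage: {covered.mean():.3f}")
```

The guarantee obtained this way is marginal (on average over calibration and test draws), which is exactly the frequentist-style coverage statement the abstract mentions.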


Estimating Task-based Performance Bounds for Accelerated MRI Image Reconstruction Methods by Use of Learned-Ideal Observers

Li, Kaiyan, Kc, Prabhat, Li, Hua, Myers, Kyle J., Anastasio, Mark A., Zeng, Rongping

arXiv.org Machine Learning

Medical imaging systems are commonly assessed and optimized by the use of objective measures of image quality (IQ). The performance of the ideal observer (IO) acting on imaging measurements has long been advocated as a figure of merit to guide the optimization of imaging systems. For computed imaging systems, the performance of the IO acting on imaging measurements also sets an upper bound on task performance that no image reconstruction method can transcend. As such, estimation of IO performance can provide valuable guidance when designing under-sampled data-acquisition techniques by enabling the identification of designs for which no reconstruction method, no matter how advanced it is or how plausible its reconstructed images appear, can recover the diagnostic information needed for a specified task. The need for such analysis is urgent because of the substantial increase in medical device submissions involving deep learning-based image reconstruction methods and the fact that such methods may produce clean images that disguise the loss of diagnostic information when data are aggressively under-sampled. Recently, convolutional neural network-approximated IOs (CNN-IOs) were investigated for estimating the performance of data-space IOs to establish task-based performance bounds for image reconstruction in an X-ray computed tomographic (CT) context. In this work, the application of such data-space CNN-IO analysis to multi-coil magnetic resonance imaging (MRI) systems has been explored. This study utilized stylized multi-coil sensitivity encoding (SENSE) MRI systems and deep-generated stochastic brain models to demonstrate the approach.
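The bound the abstract describes is easiest to see in the one case where the IO is known in closed form: a signal-known-exactly detection task in Gaussian noise, where the IO reduces to the prewhitening matched filter and its AUC is Φ(d'/√2). The sketch below uses toy dimensions and an arbitrary AR(1) covariance, not the paper's multi-coil MRI model; it only illustrates the "IO performance as a task-based ceiling" idea that the CNN-IO machinery generalizes:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

# Signal-known-exactly detection in correlated Gaussian noise.
d = 16
K = 0.5 ** np.abs(np.subtract.outer(np.arange(d), np.arange(d)))  # AR(1) covariance
s = np.ones(d)

# Scale the signal so the IO detectability index d' equals 2.
snr2 = s @ np.linalg.solve(K, s)
s *= 2.0 / np.sqrt(snr2)

# Ideal observer test statistic: prewhitening matched filter t(x) = s^T K^{-1} x.
w = np.linalg.solve(K, s)
L = np.linalg.cholesky(K)
n = 4000
x0 = rng.standard_normal((n, d)) @ L.T          # signal-absent measurements
x1 = s + rng.standard_normal((n, d)) @ L.T      # signal-present measurements
t0, t1 = x0 @ w, x1 @ w

# Empirical AUC (Mann-Whitney) versus the closed form Phi(d'/sqrt(2)).
auc_emp = (t1[:, None] > t0[None, :]).mean()
auc_theory = 0.5 * (1 + erf(2.0 / sqrt(2) / sqrt(2)))  # Phi(d'/sqrt(2)), d' = 2
print(auc_emp, auc_theory)
```

No observer acting on any reconstruction of these measurements can exceed `auc_theory`; that is the sense in which IO performance bounds every reconstruction method.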


WAVE-UNET: Wavelength based Image Reconstruction method using attention UNET for OCT images

Viqar, Maryam, Sahin, Erdem, Madjarova, Violeta, Stoykova, Elena, Hong, Keehoon

arXiv.org Artificial Intelligence

In this work, we propose to leverage a deep-learning (DL) based reconstruction framework for high-quality Swept-Source Optical Coherence Tomography (SS-OCT) images by incorporating wavelength (λ) space interferometric fringes. Generally, the fringe captured by SS-OCT is linear in wavelength space, and if the Inverse Discrete Fourier Transform (IDFT) is applied directly to extract depth-resolved spectral information, the resulting images are blurred by a broadened Point Spread Function (PSF). Thus, the recorded wavelength-space fringe must be rescaled to a uniform grid in wavenumber (k) space using k-linearization and calibration involving interpolations, which may cause loss of information along with increased system complexity. Another challenge in OCT is speckle noise, inherent in low-coherence interferometry-based systems. Hence, we propose a systematic design methodology, WAVE-UNET, to reconstruct high-quality OCT images directly from λ-space and thereby reduce this complexity. The design paradigm bypasses the linearization procedures and uses DL to enhance the realism and quality of raw λ-space scans. The framework uses a modified UNET with attention gating and residual connections, taking IDFT-processed λ-space fringes as input. The method consistently outperforms the traditional OCT system by generating good-quality B-scans with substantially reduced time complexity.
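The PSF broadening that motivates this work, and the sharpening that conventional k-linearization recovers, can be reproduced with a toy single-reflector simulation. The wavelength range, reflector depth, and sample count below are arbitrary illustrative values, and simple linear interpolation stands in for the calibration step:

```python
import numpy as np

# Single reflector at depth z; SS-OCT fringe sampled uniformly in wavelength.
N = 1024
z = 0.5e6                                     # reflector depth in nm (0.5 mm)
lam = np.linspace(1250.0, 1350.0, N)          # wavelengths in nm
k = 2 * np.pi / lam                           # nonuniform wavenumber grid
fringe_lam = np.cos(2 * k * z)

# Direct IDFT of the lambda-linear fringe: the signal is chirped in k,
# so the depth peak is smeared over many bins (broadened PSF).
spec_lam = np.abs(np.fft.fft(fringe_lam))[: N // 2]

# Conventional k-linearization: interpolate onto a uniform k grid, then IDFT.
# k decreases as lambda increases, so reverse the arrays for np.interp.
k_uniform = np.linspace(k.min(), k.max(), N)
fringe_k = np.interp(k_uniform, k[::-1], fringe_lam[::-1])
spec_k = np.abs(np.fft.fft(fringe_k))[: N // 2]

print(spec_lam.max(), spec_k.max())  # linearized peak is much sharper and taller
```

WAVE-UNET's premise is that a learned reconstruction can start from the blurred λ-space result (the `spec_lam` branch) and recover image quality without the interpolation-based `spec_k` branch.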


A Mathematical Framework for MRI "Hallucinations"

#artificialintelligence

Machine-learning methods are being actively developed for computed imaging systems like MRI. However, these methods occasionally introduce false, unexplainable structures in images, known as hallucinations, that can lead to incorrect diagnoses. Researchers at the Beckman Institute for Advanced Science and Technology and the Computational Imaging Science Laboratory have defined a mathematical framework for identifying hallucinations, a first step toward reducing their frequency. This work, "On hallucinations in tomographic image reconstruction," is published in IEEE Transactions on Medical Imaging in a special issue on machine learning methods for image reconstruction. Most modern medical imaging devices -- such as MRI, computed tomography, and PET -- do not record images directly.
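The core idea behind such a framework, that an undersampled system has a null space in which a reconstruction can invent structure unconstrained by the data, can be shown with a small linear-algebra sketch. The random forward operator and dimensions below are illustrative assumptions; the published framework is formulated for tomographic operators:

```python
import numpy as np

rng = np.random.default_rng(2)

# Undersampled linear forward model: m measurements of an n-pixel image.
n, m = 64, 24
A = rng.standard_normal((m, n))
x = rng.standard_normal(n)               # true object

# Decompose the object into measurement-space and null-space components
# via the pseudoinverse: x = A^+ A x + (I - A^+ A) x.
A_pinv = np.linalg.pinv(A)
x_meas = A_pinv @ (A @ x)
x_null = x - x_meas

# The null-space component is invisible to the measurements: any structure a
# reconstruction places there is unconstrained by the data, which is where
# hallucinated features can live without contradicting the measurements.
print(np.linalg.norm(A @ x_null))
```

Because `A @ x_null` is (numerically) zero, two images differing only in this component produce identical data, which is why plausible-looking but false structures cannot be ruled out from the measurements alone.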


Toward a Thinking Microscope: Deep Learning in Optical Microscopy and Image Reconstruction

Rivenson, Yair, Ozcan, Aydogan

arXiv.org Machine Learning

Electrical & Computer Engineering Department, Bioengineering Department, and California NanoSystems Institute, University of California, Los Angeles, CA, 90095, USA (http://innovate.ee.ucla.edu/welcome.html)

Abstract: We discuss recently emerging applications of state-of-the-art deep learning methods in optical microscopy and microscopic image reconstruction, which enable new transformations among different modes and modalities of microscopic imaging, driven entirely by image data. We believe that deep learning will fundamentally change both the hardware and the image reconstruction methods used in optical microscopy in a holistic manner. Recent results in applications of deep learning [1] have proven transformative for various fields, redefining the state-of-the-art results achieved by earlier machine learning techniques. As an example, one field that has significantly benefited from the ongoing deep learning revolution is machine vision, with landmark results that enable new capabilities in autonomous cars, fault analysis, security applications, as well as entertainment.